2023-09-19 14:30:16 · AIbase
Stronger than GPT-4: 2-Billion-Parameter Model Achieves Nearly 100% Accuracy on Arithmetic Tasks
Researchers from Tsinghua University and other institutions have introduced MathGLM, a 2-billion-parameter language model that achieves nearly 100% accuracy on arithmetic tasks, outperforming GPT-4. MathGLM uses a decoder-only architecture and owes its improved mathematical ability to training on a large-scale arithmetic dataset. It also surpasses models such as GPT-4 and ChatGPT at handling complex mixed arithmetic operations involving intricate numerical formats.
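To make the "large-scale arithmetic dataset" idea concrete, here is a minimal sketch of how such expression/answer training pairs could be generated. This is a hypothetical illustration, not MathGLM's actual data pipeline; the function name, operand ranges, and operator set are all assumptions.

```python
import random

def make_example(max_operands=4, max_value=10_000):
    """Generate one mixed-arithmetic training pair (expression, answer).

    Hypothetical sketch of arithmetic-dataset generation; MathGLM's
    real procedure is not described in the article.
    """
    n = random.randint(2, max_operands)
    operands = [str(random.randint(0, max_value)) for _ in range(n)]
    ops = [random.choice(["+", "-", "*"]) for _ in range(n - 1)]
    expr = operands[0]
    for op, operand in zip(ops, operands[1:]):
        expr += f" {op} {operand}"
    # eval is safe here: expr contains only digits, spaces, and + - *
    answer = eval(expr)
    return expr, str(answer)

if __name__ == "__main__":
    random.seed(0)
    for _ in range(3):
        expr, ans = make_example()
        print(f"{expr} = {ans}")
```

A model trained on millions of such pairs sees arithmetic with standard operator precedence across many digit lengths, which is the kind of coverage the article credits for MathGLM's accuracy.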